Duality-free Methods for Stochastic Composition Optimization


Similar Articles

Duality-free Methods for Stochastic Composition Optimization

We consider composition optimization with two expected-value functions of the form (1/n) ∑_{i=1}^{n} F_i((1/m) ∑_{j=1}^{m} G_j(x)) + R(x), which formulates many important problems in statistical learning and machine learning, such as solving Bellman equations in reinforcement learning and nonlinear embedding. Full-gradient or classical stochastic gradient descent based optimization algorithms are unsuitable...
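For intuition, below is a minimal Python sketch of an SCGD-style update for this two-level objective. A plain stochastic gradient is biased here because the inner expectation sits inside F_i, so a running estimate y of (1/m) ∑_j G_j(x) is tracked instead. The oracles G_j, J_j (Jacobian), and F_grad_i are hypothetical placeholders, not the paper's exact interface.

```python
import numpy as np

def scgd_step(x, y, G_j, J_j, F_grad_i, alpha, beta):
    """One SCGD-style update with a sampled inner map G_j, its Jacobian J_j,
    and a sampled outer gradient F_grad_i (all hypothetical oracles)."""
    y_new = (1.0 - beta) * y + beta * G_j(x)   # running estimate of the inner mean
    grad = J_j(x).T @ F_grad_i(y_new)          # chain rule through the estimate
    x_new = x - alpha * grad                   # gradient step on x
    return x_new, y_new
```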


Accelerating Stochastic Composition Optimization

Consider the stochastic composition optimization problem where the objective is a composition of two expected-value functions. We propose a new stochastic first-order method, namely the accelerated stochastic compositional proximal gradient (ASC-PG) method, which updates based on queries to the sampling oracle using two different timescales. The ASC-PG is the first proximal gradient method for t...
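As a rough illustration of the two-timescale idea, here is a hedged sketch in which the inner estimate y moves on the faster step beta_k while x takes a proximal gradient step with the slower step alpha_k. The L1 regularizer stands in for the nonsmooth term, and all oracles and schedules are assumptions, not the ASC-PG algorithm verbatim.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (an example nonsmooth term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def two_timescale_step(x, y, G_j, J_j, F_grad_i, alpha_k, beta_k, lam):
    y = (1.0 - beta_k) * y + beta_k * G_j(x)   # fast timescale: track inner mean
    grad = J_j(x).T @ F_grad_i(y)              # compositional gradient estimate
    x = soft_threshold(x - alpha_k * grad, alpha_k * lam)  # slow proximal step
    return x, y
```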


Variational Methods for Stochastic Optimization

In the study of graphical models, methods based on the concept of variational free-energy bounds have been widely used for approximating functionals of probability distributions. In this paper, we provide a method based on the same principles that can be applied to problems of stochastic optimization. In particular, this method is based upon the same principles as the generalized EM algorithm. W...
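One hedged way to make the variational-bound principle concrete for optimization: since min_x f(x) <= E_{x ~ N(mu, sigma^2 I)}[f(x)], one can descend this bound in (mu, sigma) using score-function gradients from samples. The Gaussian family, step size, and baseline-free estimator below are illustrative assumptions, not necessarily the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def variational_step(f, mu, sigma, lr=0.05, n_samples=32):
    """Descend the bound E_{N(mu, sigma^2 I)}[f] via score-function gradients."""
    eps = rng.standard_normal((n_samples, mu.size))
    x = mu + sigma * eps                                # reparameterized samples
    fx = np.array([f(xi) for xi in x])
    grad_mu = (fx[:, None] * eps).mean(axis=0) / sigma  # d/dmu of the bound
    grad_sigma = (fx * ((eps ** 2).sum(axis=1) - mu.size)).mean() / sigma
    return mu - lr * grad_mu, max(sigma - lr * grad_sigma, 1e-3)
```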


Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization

We consider the stochastic composition optimization problem proposed in [17], which has applications ranging from estimation to statistical and machine learning. We propose the first ADMM-based algorithm, named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly convex and Lipschitz smooth objectives, and has a convergence rate of O(log S / S), which improves upon the O(S^(-4/9)) rate...
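The "SVR" ingredient can be sketched as the standard SVRG estimator: the sampled gradient is corrected by the same sample's gradient at a snapshot point plus the snapshot's full gradient. The ADMM splitting and the compositional inner estimate are omitted here, and grad_i is a hypothetical per-component gradient oracle.

```python
def svrg_estimator(x, x_snapshot, full_grad_snapshot, i, grad_i):
    """Unbiased, variance-reduced gradient estimate at x for sampled index i."""
    return grad_i(x, i) - grad_i(x_snapshot, i) + full_grad_snapshot
```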


Fast Stochastic Methods for Nonsmooth Nonconvex Optimization

We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonconvex part is smooth and the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with constant minibatch converges to a stationary point. To tackle...
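For reference, a minimal sketch of the proximal stochastic gradient step in question: a stochastic gradient step on the smooth nonconvex part f, followed by the prox of the convex nonsmooth part, here g = lam * ||.||_1 as an example. The oracle stoch_grad_f and the step size eta are assumptions.

```python
import numpy as np

def prox_sgd_step(x, stoch_grad_f, lam, eta):
    """One proximal stochastic gradient step for f (smooth) + g (convex)."""
    z = x - eta * stoch_grad_f(x)                   # stochastic gradient step on f
    return np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)  # prox_{eta*g}(z)
```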



Journal

Journal Title: IEEE Transactions on Neural Networks and Learning Systems

Year: 2019

ISSN: 2162-237X, 2162-2388

DOI: 10.1109/tnnls.2018.2866699